Privacy-Preserved Distributed Learning With Zeroth-Order Optimization
Authors
Abstract
We develop a privacy-preserving distributed algorithm to minimize a regularized empirical risk function when first-order information is not available and the data are distributed over a multi-agent network. We employ a zeroth-order method to minimize the associated augmented Lagrangian in the primal domain using the alternating direction method of multipliers (ADMM). We show that the proposed algorithm, named distributed zeroth-order ADMM (D-ZOA), has intrinsic privacy-preserving properties. Most existing privacy-preserving distributed optimization/estimation algorithms exploit some perturbation mechanism to preserve privacy, which comes at the cost of reduced accuracy. Contrarily, by analyzing the inherent randomness due to the use of a zeroth-order method, we show that D-ZOA is intrinsically endowed with $(\epsilon,\delta)$-differential privacy. In addition, using the moments accountant method, we show that the total privacy leakage grows sublinearly with the number of iterations. D-ZOA outperforms existing differentially-private approaches in terms of accuracy while yielding a similar privacy guarantee. We prove that D-ZOA reaches a neighborhood of the optimal solution whose size depends on the privacy-related parameter. The convergence analysis also reveals a practically important trade-off between privacy and accuracy. Simulation results verify the desirable privacy-preserving properties of D-ZOA and its superiority over a state-of-the-art approach, as well as its network-wide convergence.
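The abstract's premise is that first-order (gradient) information is unavailable, so the algorithm relies on zeroth-order estimates built only from function evaluations. The sketch below illustrates that core idea with a standard two-point randomized gradient estimator applied to a toy regularized empirical risk; it is not the paper's D-ZOA algorithm (no ADMM, no multi-agent network, no privacy analysis), and all names, the smoothing parameter `mu`, and the step size are illustrative choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_dirs=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages directional finite differences (f(x + mu*u) - f(x)) / mu
    along random Gaussian directions u; only function evaluations are
    used, never an analytic gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / num_dirs

# Toy regularized empirical risk: least squares + ridge penalty.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(x ** 2)

# Plain zeroth-order descent: replace the true gradient by the estimate.
x = np.zeros(5)
for _ in range(200):
    x -= 0.005 * zo_gradient(f, x, rng=rng)
```

The randomness of the direction vectors `u` is the "inherent randomness" the abstract exploits for its differential-privacy argument: the iterates are already noisy without any added perturbation.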
Similar Resources
Stochastic Zeroth-order Optimization in High Dimensions
We consider the problem of optimizing a high-dimensional convex function using stochastic zeroth-order queries. Under sparsity assumptions on the gradients or function values, we present two algorithms: a successive component/feature selection algorithm and a noisy mirror descent algorithm using Lasso gradient estimates, and show that both algorithms have convergence rates that depend only loga...
On Zeroth-Order Stochastic Convex Optimization via Random Walks
We propose a method for zeroth order stochastic convex optimization that attains the suboptimality rate of Õ(n7T−1/2) after T queries for a convex bounded function f : R → R. The method is based on a random walk (the Ball Walk) on the epigraph of the function. The randomized approach circumvents the problem of gradient estimation, and appears to be less sensitive to noisy function evaluations c...
Zeroth Order Nonconvex Multi-Agent Optimization over Networks
In this paper we consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function, and it is allowed to exchange messages with its immediate neighbors. Differently from all existing works on distributed optimization, our focus is given to optimizing a class of difficult non-convex problems, and under the challenging setti...
Privacy and Regression Model Preserved Learning
Sensitive data such as medical records and business reports usually contains valuable information that can be used to build prediction models. However, designing learning models by directly using sensitive data might result in severe privacy and copyright issues. In this paper, we propose a novel matrix completion based framework that aims to tackle two challenging issues simultaneously: i) han...
Determinants of Zeroth Order Operators
For compact Riemannian manifolds all of whose geodesics are closed (aka Zoll manifolds) one can define the determinant of a zeroth order pseudodifferential operator by mimicking Szego’s definition of this determinant for the operator: multiplication by a bounded function, on the Hilbert space of square-integrable functions on the circle. In this paper we prove that the non-local contribution to...
Journal
Journal title: IEEE Transactions on Information Forensics and Security
Year: 2022
ISSN: 1556-6013, 1556-6021
DOI: https://doi.org/10.1109/tifs.2021.3139267